…`extrepo`, and then running `extrepo enable <reponame>`, where `<reponame>` is the name of the repository.
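As an illustration, the whole flow might look like this (a sketch, not taken from the original post; `winehq` is one of the repositories discussed below, and the exact output depends on your extrepo version and the current repository metadata):

```
$ sudo apt install extrepo
$ sudo extrepo enable winehq
$ sudo apt update
```

After that, the packages from the enabled repository are available to `apt` as usual.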
Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, `extrepo` is already quite useful in its current state:
- The `debian_official`, `debian_backports`, and `debian_experimental` repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through `extrepo`, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the `deb.debian.org` alias for CDN-backed package mirrors.
- The `belgium_eid` repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write `extrepo` in the first place.
- `elastic`: the elasticsearch software.
- `dovecot`, `winehq` and `bareos` contain upstream versions of their respective software. These three repositories contain software that is available in Debian, too; but their upstreams package their most recent release independently, and some people might prefer to run those instead.
- The `sury`, `fai`, and `postgresql` repositories, as well as a number of repositories such as `openstack_rocky`, `openstack_train`, `haproxy-1.5` and `haproxy-2.0` (there are more), contain more recent versions of software already packaged in Debian, by the same maintainer of that package repository. For the `sury` repository, that is PHP; for the others, the name should give it away. The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.
- The `vscodium` repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the `codium` version of Visual Studio Code is to `code` as the `chromium` browser is to `chrome`: it is a build of the same software, but without the non-free bits that make `code` not entirely Free Software. …`extrepo`, too.
- The `iridiumbrowser` repository contains a Chromium-based browser that focuses on privacy.
- …the `torproject` repository.
- …the `kubernetes` repository that contains the Kubernetes stack, as well as the `google_cloud` one containing the Google Cloud SDK.

…`extrepo`, please note that `non-free` and `contrib` repositories are disabled by default. In order to use these repositories, you must first enable the relevant policies; this can be accomplished through `/etc/extrepo/config.yaml`.
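For illustration, enabling those policies could look roughly like this (a sketch; the exact key names in `/etc/extrepo/config.yaml` should be checked against the file shipped with your version of extrepo):

```yaml
# /etc/extrepo/config.yaml (sketch -- verify key names against your installed file)
enabled_policies:
- free
- non-free
- contrib
```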
- …the `vscode` repository contains it.
- …the `msteams` repository. And, hey, `skype`.
- …`opera` and `google_chrome`.
- The `docker-ce` repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge requests rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
- …the `steam` repository.

This is to say that anyone will be able to independently review Threema's security and verify that the published source code corresponds to the downloaded app. You can view the full announcement on Threema's website.
The previous year has seen great progress in Arch Linux to get reproducible builds in the hands of the users and developers. In this talk we will explore the current tooling that allows users to reproduce packages, the rebuilder software that has been written to check packages and the current issues in this space.

During the Reproducible Builds summit in Marrakesh, GNU Guix, NixOS and Debian were able to produce a bit-for-bit identical binary when building GNU Mes, despite using three different major versions of GCC. Since the summit, additional work resulted in a bit-for-bit identical Mes binary using `tcc`, and this month a fuller update was posted by the individuals involved.
`cfingerd` (#831021), `grap` (#870573), `splint` (#924003) & `schroot` (#902804)
Last month, an issue was identified where a large number of Debian `.buildinfo` build certificates had been tainted on the official Debian build servers, as these environments had files underneath the `/usr/local/sbin` directory to prevent the execution of system services during package builds. However, this month, Aurelien Jarno and Wouter Verhelst fixed this issue in varying ways, resulting in a special `policy-rcd-declarative-deny-all` package.
Building on Chris Lamb's previous work on reproducible builds for Debian .ISO images, Roland Clobus announced his work in progress on making the Debian Live images reproducible.
Lucas Nussbaum performed an archive-wide rebuild of packages to test enabling the `reproducible=+fixfilepath` Debian build flag by default. Enabling the `fixfilepath` feature will likely fix reproducibility issues in an estimated 500-700 packages. The test revealed only 33 packages (out of 30,000 in the archive) that fail to build with `fixfilepath`. Many of those will be fixed when the default LLVM/Clang version is upgraded.
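For a single local build, the flag can be set through the build-options environment before invoking the package build (a minimal sketch; the subsequent `dpkg-buildpackage` run inside an unpacked source tree is assumed and not shown):

```shell
# Opt in to the fixfilepath reproducibility feature for this shell session;
# dpkg-buildflags reads DEB_BUILD_OPTIONS and then emits a
# -ffile-prefix-map flag so the build path is not embedded in binaries.
export DEB_BUILD_OPTIONS="reproducible=+fixfilepath"
echo "$DEB_BUILD_OPTIONS"
```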
79 reviews of Debian packages were added, 23 were updated and 17 were removed this month, adding to our knowledge about identified issues. Chris Lamb added and categorised a number of new issue types, including packages that capture their build path via `quicktest.h` and absolute build directories in documentation generated by Doxygen, etc.
Lastly, Lukas Puehringer uploaded a new version of in-toto to Debian, which was sponsored by Holger Levsen.
…159 and 160 to Debian:

- …`pgpdump`, and check that the associated binary is actually installed before attempting to run it. (#969753)
- …`guestfs` cleanup failure.
- …`FALLBACK_FILE_EXTENSION_SUFFIX`, otherwise we run `pgpdump` against all files that are recognised by `file(1)` as `data`.
- …the `jekyll-polyglot` package is required.

Lastly, `diffoscope.org` and `reproducible-builds.org` were transferred to Software Freedom Conservancy. Many thanks to Brett Smith from Conservancy, Jérémy Bobbio (lunar) and Holger Levsen for their help with transferring, and to Mattia Rizzolo for initiating this.
- `cfn-python-lint` (build failure)
- `clutter` (avoid a random ID in HTML from `xsltproc`)
- `kubernetes` (1-bit order in manual page)
- `libint` (merged, filesystem order)
- `libmysofa` (disable `-fprofile-arcs` and code coverage)
- `libnet` (merged, date)
- `libqb` (date / copyright)
- `libsemigroups` (CPU detection)
- `nauty` (CPU type detection)
- `git2-rs` (sort return ordering of `readdir(3)`)

…`git2-rs`, `pyftpdlib`, `python-nbclient`, `python-pyzmq` & `python-sidpy`.
`tests.reproducible-builds.org`. This month, Holger Levsen made the following changes:

- …`usrmerge`.
- …`systemctl` status and the number of diffoscope processes.
- …the `xz` compression format.
- …`schroot` sessions after 2 days, not 3.
- …`schroot` sessions.

…the `#reproducible-builds` channel on `irc.oftc.net`.
rb-general@lists.reproducible-builds.org
== On-premise ==

    Camera 1
        v                 ---> Frontend
    Slides -> Voctomix -> Backend -+--> Frontend
        ^                 ---> Frontend
    Camera 2

== Online ==

    Jitsi
        v                 ---> Frontend
    Questions -> Voctomix -> Backend -+--> Frontend
        ^                 ---> Frontend
    Pre-recorded video
 | HLS | RTMP |
---|---|---|
Pros | | |
Cons | | |
`live.debconf.org` and redirect connections to the nearest server.

Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database and reworked our ansible frontend server role. Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role.
Ansible & CI improvements

Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in the DSL and implement new features. All that to say, a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful.

All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)
Time (UTC) | Speaker | Talk |
---|---|---|
11:00 - 11:10 | MDCO team members | Hello + Welcome |
11:30 - 11:50 | Wouter Verhelst | Extrepo |
12:00 - 12:45 | JP Mengual | Debian France, trust european organization |
13:00 - 13:20 | Arnaud Ferraris | Bringing Debian to mobile phones, one package at a time |
13:30 - 15:00 | Lunch Break | A chance for the teams to catch some air |
15:00 - 15:45 | JP Mengual | The community team, United Nations Organizations of Debian? |
16:00 - 16:45 | Christoph Biedl | Clevis and tang - overcoming the disk unlocking problem |
17:00 - 17:45 | Antonio Terceiro | I'm a programmer, how can I help Debian |
Time (UTC) | Speaker | Talk |
---|---|---|
11:00 - 11:45 | Andreas Tille | The effect of Covid-19 on the Debian Med project |
12:00 - 12:45 | Paul Gevers | BoF: running autopkgtest for your package |
13:00 - 13:20 | Ben Hutchings | debplate: Build many binary packages with templates |
13:30 - 15:00 | Lunch break | A chance for the teams to catch some air |
15:00 - 15:45 | Holger Levsen | Reproducing bullseye in practice |
16:00 - 16:45 | Jonathan Carter | Striving towards excellence |
17:00 - 17:45 | Delib* | Organizing Peer-to-Peer Debian Facilitation Training |
18:00 - 18:15 | MDCO team members | Closing |
`#minidebconf-online` IRC channel on OFTC.

…`extrepo enable`
, you will get the new key rather than the old one. On top of that, if you had already enabled the repository through `extrepo`, all that is needed for you right now to pull in the new key is to run `extrepo update`.
While I do apologise for the late update, hopefully this should make
some people's lives a bit easier.
And if GitLab B.V. reads this: please send me an MR to the repository next time, so that the process can be done in time.
DPL Campaign 2020

On the 12th of March, I posted my self-nomination for the Debian Project Leader election. This is the second time I'm running for DPL, and you can read my platform here. The campaign period covered the second half of the month, where I answered a bunch of questions on the debian-vote list. The voting period is currently open and ends on 18 April.
Debian Social

This month we finally announced the Debian Social project: a project that hosts a few websites with the goal to improve communication and collaboration within the Debian project, improve visibility on the work that people do, and make it easier for general users to interact with the community and feel part of the project.

Some History

This has been a long time in the making. From my side, I've been looking at better ways to share/play our huge DebConf video archives for the last 3 years or so. Initially I was considering either some sort of script or small server-side app that combined the archives and the metadata into a player, or using something like MediaDrop (which I was using on my highvoltage.tv website for a while). I ran into a lot of MediaDrop's limitations early on. It was fine for a very small site, but I don't think it would ever be the right solution for a Debian-wide video hosting platform, and it didn't seem all that actively maintained either. Wouter went ahead and implemented a web player option for the video archives. His solution is good because it doesn't rely on any server-side software, so it's easy to mirror, and someone who lives on an island could download it and view it offline in that player. It still didn't solve all our problems, though. Popular videos (by either views or likes) weren't easily discoverable, and the site itself isn't that easy to discover.

Then PeerTube came along. PeerTube provides a similar type of interface to MediaDrop or YouTube that gives you likes, view counts and comments. But what really set it apart from previous things that we looked at was that it's a federated service. Not only does it federate with other PeerTube instances, but the protocols it uses mean that it can connect to all kinds of other services that make up an interconnected platform called the Fediverse.

This was especially great since independent video sites tend to become these lonely islands on the web that become isolated and forgotten. With PeerTube, video sites can subscribe to similar sites on the Fediverse, which makes videos and other video sites significantly more discoverable and attracts more eyeballs. At DebConf19 I wanted to ramp up the efforts to make a Debian PeerTube instance a reality. I spoke to many people about this and discovered that some Debianites are already making all kinds of Debian videos in many different languages. Some were even distributing them locally on DVD and have never uploaded them. I thought that the Debian PeerTube instance could not only be a good platform for DebConf videos, but it could be a good home for many free software content creators, especially if they create Debian-specific content.

I spoke to Rhonda about it, who's generally interested in the Fediverse and wanted to host instances of Pleroma (a microblogging service) and PixelFed (a free image hosting service that resembles the Instagram site), but needed a place to host them. We decided to combine efforts, and since a very large number of fediverse services end with .social in their domain names, we ended up calling this project Debian Social. We're also hosting some non-fediverse services like a WordPress multisite and a Jitsi instance for video chatting.

Current Status

Currently, we have a few services in a beta/testing state. I think we have most of the kinks sorted out to get them to a phase where they're ready for wider use. Authentication is a bit of a pain point right now. We don't really have a single sign-on service in Debian that guest users can use, or that all these services integrate with. So for now, if you're a Debian Developer who wants an account on one of these services, you can request a new account by creating a ticket on salsa.debian.org and selecting the "New account" template.

Not all services support having dashes (or even any punctuation in the username whatsoever), so to keep it consistent we're currently appending just "guest" to salsa usernames for guest users, and "team" at the end of any Debian team accounts or official accounts using these services. Stefano finished uploading all the DebConf videos to the PeerTube instance. Even though it's largely automated, it ended up being quite a big job fixing up some old videos and their metadata, and adding support for PeerTube to the DebConf video scripts. This also includes some videos from sprints and MiniDebConfs that had video coverage, currently totaling 1359 videos.

Future plans

This is still a very early phase for the project. Here are just some ideas that might develop over time on the Debian Social sites:
Debian packaging

I had the sense that there were fewer upstream releases this month. I suspect that everyone was busy figuring out how to cope during Covid-19 lockdowns taking place all over the world.

- 2020-03-02: Upload package calamares (3.2.10-1) to Debian unstable.
- 2020-03-10: Upload package gnome-shell-extension-dash-to-panel (29-1) to Debian unstable.
- 2020-03-10: Upload package gnome-shell-extension-draw-on-your-screen (5.1-1) to Debian unstable.
- 2020-03-28: Upload package gnome-shell-extension-dash-to-panel (31-1) to Debian unstable.
- 2020-03-28: Upload package gnome-shell-extension-draw-on-your-screen (6-1) to Debian unstable.
- 2020-03-28: Update package python3-flask-autoindexing packaging; not releasing due to a licensing change that needs further clarification (GitHub issue #55).
- 2020-03-28: Upload package gamemode (1.5.1-1) to Debian unstable.
- 2020-03-28: Upload package calamares (3.2.21-1) to Debian unstable.
Debian mentoring

- 2020-03-03: Sponsor package python-jaraco.functools (3.0.0-1) (Python team request).
- 2020-03-03: Review python-ftputil (3.4-1) (needs some more work) (Python team request).
- 2020-03-04: Sponsor package pythonmagick (0.9.19-6) for Debian unstable (Python team request).
- 2020-03-23: Sponsor package bitwise (0.41-1) for Debian unstable (email request).
- 2020-03-23: Sponsor package gpxpy (1.4.0-1) for Debian unstable (Python team request).
- 2020-03-28: Sponsor package gpxpy (1.4.0-2) for Debian unstable (Python team request).
- 2020-03-28: Sponsor package celery (4.4.2-1) for Debian unstable (Python team request).
- 2020-03-28: Sponsor package buildbot (2.7.0-1) for Debian unstable (Python team request).
Belgians

This month started off in Belgium for FOSDEM on 1-2 February. I attended FOSDEM in Brussels and wrote a separate blog entry for that. The month ended with Belgians at Tammy and Wouter's wedding. On Thursday we had Wouter's bachelors, and then over the weekend I stayed over at their wedding venue. I thought that other Debianites might be interested, so I'm sharing some photos here with permission from Wouter. It was the only wedding I've been at where nearly everyone had questions about Debian! I first met Wouter on the bus during the day trip at DebConf12 in Nicaragua; back then I'd eagerly followed the Debianites on Planet Debian for a while, so it was like meeting someone famous. Little did I know that 8 years later, I'd be at his wedding back in my part of the world. If you went to DebConf16 in South Africa, you might remember Tammy, who has done a lot of work for DC16, including most of the artwork, a bunch of website work, design of the badges, bags, etc., and also did a lot of organisation for the day trips. Tammy and Wouter met while Tammy was reviewing the artwork in the video loops for the DebConf videos, and then things developed from there.

Wouter's Bachelors

Wouter was blindfolded and kidnapped and taken to the city center, where we prepared to go on a bike tour of Cape Town, stopping for beer at a few places along the way. Wouter was given a list of tasks that he had to complete, or the wedding wouldn't be allowed to continue
The Wedding

Friday afternoon we arrived at the lodge for the weekend. I had some work to finish, but at least this was nicer than where I was going to work if it wasn't for the wedding. When the wedding co-ordinators started setting up, I noticed that there were all these swirls that almost looked like Debian logos. I asked Wouter if that was on purpose or just a happy accident. He said "Hmm! I haven't even noticed that yet!"; he hadn't had a chance to ask Tammy yet, so it could still be her touch. Kyle and I weren't the only ones out on the river that day. When the wedding ceremony started, Tammy made a dramatic entrance coming in on a boat, standing at the front with the breeze blowing her dress like a valkyrie. Congratulations again to both Tammy and Wouter. It was a great experience meeting both their families and friends and all the love that was swirling around all weekend.

Debian Package Uploads

- 2020-02-07: Upload package calamares (3.2.18-1) to Debian unstable.
- 2020-02-07: Upload package python-flask-restful (0.3.8-1) to Debian unstable.
- 2020-02-10: Upload package kpmcore (4.1.0-1) to Debian unstable.
- 2020-02-16: Upload package fracplanet (0.5.1-5.1) to Debian unstable (Closes: #946028).
- 2020-02-20: Upload package kpmcore (4.1.0-2) to Debian unstable.
- 2020-02-20: Upload package bluefish (2.2.11) to Debian unstable.
- 2020-02-20: Upload package gdisk (1.0.5-1) to Debian unstable.
- 2020-02-20: Accept MR#6 for gamemode.
- 2020-02-23: Upload package tanglet (1.5.5-1) to Debian unstable.
- 2020-02-23: Upload package gamemode (1.5-1) to Debian unstable.
- 2020-02-24: Upload package calamares (3.2.19-1) to Debian unstable.
- 2020-02-24: Upload package partitionmanager (4.1.0-1) to Debian unstable.
- 2020-02-24: Accept MR#7 for gamemode.
- 2020-02-24: Merge MR#1 for calcoo.
- 2020-02-24: Upload package calcoo (1.3.18-8) to Debian unstable.
- 2020-02-24: Merge MR#1 for flask-api.
- 2020-02-25: Upload package calamares (3.2.19.1-1) to Debian unstable.
- 2020-02-25: Upload package gnome-shell-extension-impatience (0.4.5-4) to Debian unstable.
- 2020-02-25: Upload package gnome-shell-extension-harddisk-led (19-2) to Debian unstable.
- 2020-02-25: Upload package gnome-shell-extension-no-annoyance (0+20170928-f21d09a-2) to Debian unstable.
- 2020-02-25: Upload package gnome-shell-extension-system-monitor (38-2) to Debian unstable.
- 2020-02-25: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp3) to Debian experimental.
Debian Mentoring

- 2020-02-10: Sponsor package python-marshmallow-polyfield (5.8-1) for Debian unstable (Python team request).
- 2020-02-10: Sponsor package geoalchemy2 (0.6.3-2) for Debian unstable (Python team request).
- 2020-02-13: Sponsor package python-tempura (2.2.1-1) for Debian unstable (Python team request).
- 2020-02-13: Sponsor package python-babel (2.8.0+dfsg.1-1) for Debian unstable (Python team request).
- 2020-02-13: Sponsor package python-pynvim (0.4.1-1) for Debian unstable (Python team request).
- 2020-02-13: Review package ledmon (0.94-1) (needs some more work) (mentors.debian.net request).
- 2020-02-14: Sponsor package citeproc-py (0.3.0-6) for Debian unstable (Python team request).
- 2020-02-24: Review package python-suntime (1.2.5-1) (needs some more work) (Python team request).
- 2020-02-24: Sponsor package python-babel (2.8.0+dfsg.1-2) for Debian unstable (Python team request).
- 2020-02-24: Sponsor package 2048 (0.0.0-1~exp1) for Debian experimental (mentors.debian.net request).
- 2020-02-24: Review package notcurses (1.1.8-1) (needs some more work) (mentors.debian.net request).
- 2020-02-25: Sponsor package cloudpickle (1.3.0-1) for Debian unstable (Python team request).
Debian Misc

- 2020-02-12: Apply Planet Debian request and close MR#21.
- 2020-02-23: Accept MR#6 for ToeTally (DebConf Video team upstream).
- 2020-02-23: Accept MR#7 for ToeTally (DebConf Video team upstream).
`sreview-master`, `sreview-encoder`, `sreview-detect`, and `sreview-web`. It's possible to install the four packages on different machines, but let's not go into too much detail there, yet.

…`sreview-config --action=dump`. This will show you the current configuration of SReview. If you want to change something, either change it in `/etc/sreview/config.pm`, or just run `sreview-config --set=variable=value --action=update`.

…`sreview-user -d --action=create -u <your email>`. This will create an administrator user in the SReview database.

…`http://localhost:8080/`, and test whether you can log on.

…the `state_actions` configuration parameter (e.g., by way of `sreview-config --action=update --set=state_actions=...` or by editing `/etc/sreview/config.pm`).

…the `state_actions` entry for notification so that it sends out a notification (e.g., through an IRC bot or an email address, or something along those lines). Alternatively, enable the "anonreviews" option, so that the overview page has links to your talk.

…the `inputglob` and `parse_re` configuration parameters of SReview. The first should contain a filesystem glob that will find your raw assets; the second should parse the filename into room, year, month, day, hour, minute, and second components. Look at the defaults of those options for examples (or just use those, and store your files as `/srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*`).

…the `preroll_template` configuration option.

…the `postroll_template` resp. `postroll` configuration option.

…`SC`
) but also an encryption subkey (marked `E`), a separate signature key (`S`), and two authentication keys (marked `A`) which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
          8DC901CE64146C048AD50FBB792152527B75921E
    uid           [ultimate] Antoine Beaupré <anarcat@anarc.at>
    uid           [ultimate] Antoine Beaupré <anarcat@koumbit.org>
    uid           [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
    uid           [ultimate] Antoine Beaupré <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (`sub`) and identities (`uid`) are bound by the main
certification key using cryptographic self-signatures. So while an
attacker stealing a private subkey can spoof signatures in my name or
authenticate to other servers, that key can always be revoked by the
main certification key. But if the certification key gets stolen, all
bets are off: the attacker can create or revoke identities or subkeys as
they wish. In a catastrophic scenario, an attacker could even steal the
key and remove your copies, taking complete control of the key, without
any possibility of recovery. Incidentally, this is why it is so
important to generate a revocation certificate and store it offline.
So by moving the certification key offline, we reduce the attack surface
on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or
signature) can stay online but if they get stolen, the certification key
can revoke those keys without having to revoke the main certification
key as well. Note that a stolen encryption key is a different problem:
even if we revoke the encryption subkey, this will only affect future
encrypted messages. Previous messages will be readable by the attacker
with the stolen subkey even if that subkey gets revoked, so the benefits
of revoking encryption certificates are more limited.
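Generating that revocation certificate ahead of time is a single command (a sketch; `KEYID` is a placeholder for your own key's fingerprint, and recent GnuPG releases also create one automatically under `openpgp-revocs.d`):

```
$ gpg --output revoke.asc --gen-revoke KEYID
```

Store `revoke.asc` offline, e.g. printed on paper or on a medium kept with your backups, so it cannot be stolen together with the key it revokes.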
…the `--iter-time` argument when creating a LUKS partition to increase key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in `gpg-agent`, which is now responsible for all secret key operations.
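On the LUKS side, that delay is chosen when the partition is created (a sketch; `/dev/sdX` is a placeholder, and the command will destroy any data on that device):

```
$ cryptsetup luksFormat --iter-time 5000 /dev/sdX
```

`--iter-time` takes milliseconds, so this requests roughly five seconds of key-derivation work per unlock attempt.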
The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated `keygrip` files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore.
Another option is to set up a separate air-gapped system to perform
certification operations. An example is the PGP clean
room project,
which is a live system based on Debian and designed by DD Daniel Pocock
to operate an OpenPGP and X.509 certificate authority using commodity
hardware. The basic principle is to store the secrets on a different
machine that is never connected to the network and, therefore, not
exposed to attacks, at least in theory. I have personally discarded that
approach because I feel air-gapped systems provide a false sense of
security: data eventually does need to come in and out of the system,
somehow, even if only to propagate signatures out of the system, which
exposes the system to attacks.
System updates are similarly problematic: to keep the system secure,
timely security updates need to be deployed to the air-gapped system. A
common use pattern is to share data through USB keys, which introduce a
vulnerability where attacks like
BadUSB can infect the air-gapped
system. From there, there is a multitude of exotic ways of exfiltrating
the data using
LEDs,
infrared
cameras,
or the good old
TEMPEST
attack. I therefore concluded the complexity tradeoffs of an air-gapped
system are not worth it. Furthermore, the workflow for air-gapped
systems is complex: even though PGP clean room went a long way, it's
still lacking even simple scripts that allow signing or transferring
keys, which is a problem shared by the external LUKS storage approach.
…the `keytocard` command in the `--edit-key` interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex.
Keycards are also useful if you operate on multiple computers. A common
problem when using GnuPG on multiple machines is how to safely copy and
synchronize private key material among different devices, which
introduces new security problems. Indeed, a "good rule of thumb in a
forensics lab",
according
to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum
personal data possible on your systems". Keycards provide the best of
both worlds here: you can use your private key on multiple computers
without actually storing it in multiple places. In fact, Mike Gerwitz
went as far as
saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.

"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop, and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key, a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificate and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically.

Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.

Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle.

A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.
export GNUPGHOME=$(mktemp -d)
gpg --generate-key
gpg --edit-key UID
key
command to select the first subkey, then copy it to
the keycard (you can also use the addcardkey
command to just
generate a new subkey directly on the keycard):
gpg> key 1
gpg> keytocard
save
command, which will
remove the local copy of the private key, so the keycard will be the
only copy of the secret key. Otherwise use the quit
command to
save the key on the keycard, but keep the secret key in your normal
keyring; answer "n" to "save changes?" and "y" to "quit without
saving?" . This way the keycard is a backup of your secret key.$GNUPGHOME
)--list-secret-keys
will show it as
sec>
(or ssb>
for subkeys) instead of the usual sec
keyword. If
the key is completely missing (for example, if you moved it to a LUKS
container), the #
sign is used instead. If you need to use a key from
a keycard backup, you simply do gpg --card-edit
with the key plugged
in, then type the fetch
command at the prompt to fetch the public key
that corresponds to the private key on the keycard (which stays on the
keycard). This is the same procedure as the one to use the secret key
on another
computer.
This article first appeared in the Linux Weekly News.
SC
) but also an
encryption subkey (marked E
), a separate signature key (S
), and two
authentication keys (marked A
) which I use as RSA keys to log into
servers using SSH, thanks to the
Monkeysphere project.
pub rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
8DC901CE64146C048AD50FBB792152527B75921E
uid [ultimate] Antoine Beaupré <anarcat@anarc.at>
uid [ultimate] Antoine Beaupré <anarcat@koumbit.org>
uid [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
uid [ultimate] Antoine Beaupré <anarcat@debian.org>
sub rsa2048/B7F648FED2DF2587 2012-07-18 [A]
sub rsa2048/604E4B3EEE02855A 2012-07-20 [A]
sub rsa4096/A51D5B109C5A5581 2009-05-29 [E]
sub rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub
) and identities (uid
) are bound by the main
certification key using cryptographic self-signatures. So while an
attacker stealing a private subkey can spoof signatures in my name or
authenticate to other servers, that key can always be revoked by the
main certification key. But if the certification key gets stolen, all
bets are off: the attacker can create or revoke identities or subkeys as
they wish. In a catastrophic scenario, an attacker could even steal the
key and remove your copies, taking complete control of the key, without
any possibility of recovery. Incidentally, this is why it is so
important to generate a revocation certificate and store it offline.
So by moving the certification key offline, we reduce the attack surface
on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or
signature) can stay online but if they get stolen, the certification key
can revoke those keys without having to revoke the main certification
key as well. Note that a stolen encryption key is a different problem:
even if we revoke the encryption subkey, this will only affect future
encrypted messages. Previous messages will be readable by the attacker
with the stolen subkey even if that subkey gets revoked, so the benefits
of revoking encryption certificates are more limited.
You can use the --iter-time argument when creating
a LUKS partition to increase the key-derivation delay, which makes
brute-forcing much harder. Indeed, GnuPG 2.x doesn't
have a run-time option to configure the
key-derivation algorithm, although a
patch was introduced recently to make the
delay configurable at compile time in gpg-agent
, which is now
responsible for all secret key operations.
The downside of external volumes is complexity: GnuPG makes it difficult
to extract secrets out of its keyring, which makes the first setup
tricky and error-prone. This is easier in the 2.x series thanks to the
new storage system and the associated keygrip
files, but it still
requires arcane knowledge of GPG internals. It is also inconvenient to
use secret keys stored outside your main keyring when you actually do
need to use them, as GPG doesn't know where to find those keys anymore.
Another option is to set up a separate air-gapped system to perform
certification operations. An example is the PGP clean
room project,
which is a live system based on Debian and designed by DD Daniel Pocock
to operate an OpenPGP and X.509 certificate authority using commodity
hardware. The basic principle is to store the secrets on a different
machine that is never connected to the network and, therefore, not
exposed to attacks, at least in theory. I have personally discarded that
approach because I feel air-gapped systems provide a false sense of
security: data eventually does need to come in and out of the system,
somehow, even if only to propagate signatures out of the system, which
exposes the system to attacks.
System updates are similarly problematic: to keep the system secure,
timely security updates need to be deployed to the air-gapped system. A
common use pattern is to share data through USB keys, which introduce a
vulnerability where attacks like
BadUSB can infect the air-gapped
system. From there, there is a multitude of exotic ways of exfiltrating
the data using
LEDs,
infrared
cameras,
or the good old
TEMPEST
attack. I therefore concluded the complexity tradeoffs of an air-gapped
system are not worth it. Furthermore, the workflow for air-gapped
systems is complex: even though PGP clean room went a long way, it's
still lacking even simple scripts that allow signing or transferring
keys, which is a problem shared by the external LUKS storage approach.
Moving keys into a keycard is easy (via the keytocard command
in the --edit-key interface), whereas moving private key material to a
LUKS-encrypted device or air-gapped computer is more complex.
Keycards are also useful if you operate on multiple computers. A common
problem when using GnuPG on multiple machines is how to safely copy and
synchronize private key material among different devices, which
introduces new security problems. Indeed, a "good rule of thumb in a
forensics lab",
according
to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum
personal data possible on your systems". Keycards provide the best of
both worlds here: you can use your private key on multiple computers
without actually storing it in multiple places. In fact, Mike Gerwitz
went as far as
saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a built-in card reader, you also need to fish the reader from your backpack or wherever, etc.

"Smartcards" here refer to older OpenPGP cards that relied on the ISO/IEC 7816 smartcard connectors and therefore needed a specially built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop, and I have yet to hear of someone shoving a full laptop into a washing machine.

When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key, a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has revocation certificates and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically.
Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.

Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the card's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle.

A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have it on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.
export GNUPGHOME=$(mktemp -d)
gpg --generate-key
gpg --edit-key UID
Use the key
command to select the first subkey, then copy it to
the keycard (you can also use the addcardkey
command to just
generate a new subkey directly on the keycard):
gpg> key 1
gpg> keytocard
Use the save
command, which will
remove the local copy of the private key, so the keycard will be the
only copy of the secret key. Otherwise use the quit
command to
save the key on the keycard, but keep the secret key in your normal
keyring; answer "n" to "save changes?" and "y" to "quit without
saving?". This way the keycard is a backup of your secret key (which
stays in $GNUPGHOME). Once a key lives on a keycard, --list-secret-keys
will show it as
sec>
(or ssb>
for subkeys) instead of the usual sec
keyword. If
the key is completely missing (for example, if you moved it to a LUKS
container), the #
sign is used instead. If you need to use a key from
a keycard backup, you simply do gpg --card-edit
with the key plugged
in, then type the fetch
command at the prompt to fetch the public key
that corresponds to the private key on the keycard (which stays on the
keycard). This is the same procedure as the one to use the secret key
on another
computer.
This article first appeared in the Linux Weekly News.
system("ffmpeg -y -i $outputdir/$slug.ts -pass 1 -passlogfile ...");
inside a Perl environment where the $outputdir
and $slug
variables are
set. That works, but it has some downsides; e.g.,
adding or removing options based on which codecs we're using is not so
easy. It would be much more flexible if the command lines were generated
dynamically, based on the requested output bandwidth and codecs, rather
than hardcoded in the file. Case in point: currently there are
multiple versions of some of the backend scripts that differ only in
details, mostly the chosen codec on the ffmpeg command line.
Obviously this is suboptimal.
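Dynamic generation doesn't have to be fancy. As an illustration only (this is not SReview code, and it is Python rather than Perl; all names are invented), a command line can be assembled from a settings structure instead of being hardcoded per codec:

```python
def ffmpeg_command(infile, outfile, settings):
    # Build an ffmpeg argument list from a settings dict instead of
    # hardcoding one command line per codec variant.
    cmd = ["ffmpeg", "-y", "-i", infile]
    for option, value in settings.items():
        cmd += [option, str(value)]
    cmd.append(outfile)
    return cmd

# Swapping codecs is now a data change, not a new script:
cmd = ffmpeg_command("in.ts", "out.mp4", {"-c:v": "libx264", "-b:v": "500k"})
assert cmd == ["ffmpeg", "-y", "-i", "in.ts",
               "-c:v", "libx264", "-b:v", "500k", "out.mp4"]
```

With this shape, the per-codec scripts collapse into per-codec settings.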
Instead, we want a way where video file formats can be autodetected, so
that I can just say "create a file that uses encoder etc settings of
this other file here". In addition, we also want a way where we can
say "create a file that uses encoder etc settings of this other file
here, except for these one or two options that I want to fine-tune
manually". When I first thought about doing that, about a year ago, it
seemed complicated and not worth it, or at least not to that
extent.
Enter Moose.
The Moose OO system for Perl 5 is an interesting way to do object
orientation in Perl. I knew Perl supports OO, and I had heard about
Moose, but never had looked into it, mostly because the standard perl OO
features were "good enough". Until now.
Moose has a concept of adding 'attributes' to objects. Attributes can be
set at object construction time, or can be accessed later on by way of
getter/setter functions, or even simply functions named after the
attribute itself (the default). For more complicated attributes, where
the value may not be known until some time after the object has been
created, Moose borrows the concept of "lazy" variables from Perl 6:
package Object;

use Moose;

has 'time' => (
    is => 'rw',
    builder => 'read_time',
    lazy => 1,
);

sub read_time {
    return localtime();
}
The above object has an attribute 'time', which will not have a value
initially. However, upon first read, the 'read_time' builder will be
called; the result is cached, and on that and all further reads of the
same attribute, the cached result will be returned. In addition, since
the attribute is read/write, the time can also be written to. In that
case, any cached value that may exist will be overwritten, and if no
cached value exists yet, the read_time function will never be called.
(It is also possible to clear values if need be, so that the builder
would be called again.)
We use this with the following pattern:
package SReview::Video;

use Moose;
use JSON;

has 'url' => (
    is => 'rw',
);

has 'video_codec' => (
    is => 'rw',
    builder => '_probe_videocodec',
    lazy => 1,
);

has 'videodata' => (
    is => 'bare',
    reader => '_get_videodata',
    builder => '_probe_videodata',
    lazy => 1,
);

has 'probedata' => (
    is => 'bare',
    reader => '_get_probedata',
    builder => '_probe',
    lazy => 1,
);

sub _probe_videocodec {
    my $self = shift;
    return $self->_get_videodata->{codec_name};
}

sub _probe_videodata {
    my $self = shift;
    if(!exists($self->_get_probedata->{streams})) {
        return;
    }
    foreach my $stream(@{$self->_get_probedata->{streams}}) {
        if($stream->{codec_type} eq "video") {
            return $stream;
        }
    }
    return;
}

sub _probe {
    my $self = shift;
    open JSON, "ffprobe -print_format json -show_format -show_streams " . $self->url . " |";
    my $json = "";
    while(<JSON>) {
        $json .= $_;
    }
    close JSON;
    return decode_json($json);
}
The videodata
and probedata
attributes are internal-use-only
attributes, and are therefore of the 'bare' type; that is, they
can be neither read nor written to directly. However, we do add 'reader'
functions that can be used from inside the object, so that the object
itself can access them. These reader functions are generated, so they're
not part of the object source. The probedata
attribute's builder simply calls
ffprobe
with the right command-line arguments to retrieve data in JSON
format, and then decodes that JSON output.
Since the parsed JSON contains an array with (at least) two
streams (one for video and one for audio), and since the
ordering of those streams depends on the file and is therefore not
guaranteed, we have to loop over them. Since doing so in each and every
attribute of the file we might be interested in would be tedious, we add
a videodata
attribute that just returns the data for the first found
video stream (the actual source also contains a similar one for audio
streams).
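The stream-selection logic boils down to "first stream whose codec_type is video". As a sketch in Python (illustration only, with a made-up ffprobe-style structure, not the SReview source):

```python
def first_video_stream(probedata):
    # Mirror of _probe_videodata: return the first stream whose
    # codec_type is "video", or None if there is none.
    for stream in probedata.get("streams", []):
        if stream.get("codec_type") == "video":
            return stream
    return None

# Hypothetical ffprobe-like output with the audio stream listed first,
# showing why we cannot just take streams[0]:
probedata = {"streams": [
    {"codec_type": "audio", "codec_name": "aac"},
    {"codec_type": "video", "codec_name": "h264"},
]}
assert first_video_stream(probedata)["codec_name"] == "h264"
```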
So, if you create an SReview::Video
object and you pass it a filename
in the url
attribute, and then immediately run print
$object->video_codec
, the object will run ffprobe
, cache the (decoded) output for further use, and print the codec name
of the file. If you instead first run $object->video_codec('h264')
,
then the ffprobe
run and most of the caching will be skipped, and instead
'h264' will be returned as the video codec name.
Okay, so with a reasonably small amount of code, we now have a bunch of
attributes that have defaults based on actual files, but which can be
overwritten when necessary. Useful, right? Well, you might also want to
generate a video file that uses the same codec settings as this other
file here. That's easy, too. First, we add another attribute:
has 'reference' => (
    is => 'ro',
    isa => 'SReview::Video',
    predicate => 'has_reference',
);
which we then use in the _probe
method like so:
sub _probe {
    my $self = shift;
    if($self->has_reference) {
        return $self->reference->_get_probedata;
    }
    # original code remains here
}
With that, we can create an object like so:
my $video = SReview::Video->new(url => 'file.ts');
my $generated = SReview::Video->new(url => 'file2.ts', reference => $video);
Now, if we ask the $generated
object what the value of its
video_codec
setting is, without telling it ourselves first, it will use
the $video
object's probed data to find out.
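The reference mechanism is plain delegation. A toy Python sketch of the same idea (names invented, not the actual SReview code; the probe step is stubbed out instead of running ffprobe):

```python
class Video:
    # Toy sketch of the SReview::Video 'reference' idea: probe data is
    # taken from the reference object when one is given.
    def __init__(self, url, reference=None):
        self.url = url
        self.reference = reference
        self._probedata = None

    def probe(self):
        # Stand-in for running ffprobe on self.url.
        return {"streams": [{"codec_type": "video", "codec_name": "h264"}]}

    @property
    def probedata(self):
        if self.reference is not None:
            # Delegate to the reference, as _probe does above.
            return self.reference.probedata
        if self._probedata is None:
            self._probedata = self.probe()   # lazy + cached
        return self._probedata

video = Video("file.ts")
generated = Video("file2.ts", reference=video)
# generated inherits its probed settings from video:
assert generated.probedata is video.probedata
```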
The only thing still missing is generating the ffmpeg command line, but
that's all fairly straightforward and therefore left as an exercise to
the reader. Or you can cheat, and look it
up.